Latent factor model estimation typically relies on either using domain knowledge to manually pick several observed covariates as factor proxies, or purely conducting multivariate analysis such as principal component analysis. However, the former approach may suffer from bias, while the latter cannot incorporate additional information. We propose to bridge these two approaches while allowing the number of factor proxies to diverge, making latent factor model estimation robust, flexible, and statistically more accurate. As a bonus, the number of factors is also allowed to grow. At the heart of our method is a penalized reduced rank regression that combines information. To further handle heavy-tailed data, we propose a computationally attractive penalized robust reduced rank regression method. We establish faster rates of convergence compared with the benchmark. Extensive simulations and real examples illustrate the advantages.
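The reduced rank regression at the core of the method can be illustrated through its classical, unpenalized form: fit OLS, then project the coefficient matrix onto the leading singular directions of the fitted values. A minimal numpy sketch, with illustrative data and rank; the paper's penalized and robust estimators build on this basic construction:

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Classical reduced rank regression: OLS fit, then projection of the
    coefficient matrix onto the top right-singular directions of the
    fitted values. Illustrative sketch only."""
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)   # (p x q) OLS coefficients
    _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    P = Vt[:rank].T @ Vt[:rank]                     # rank-`rank` projection
    return B_ols @ P                                # rank-constrained estimate

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
B_true = rng.normal(size=(5, 1)) @ rng.normal(size=(1, 4))  # rank-1 signal
Y = X @ B_true + 0.1 * rng.normal(size=(200, 4))
B_hat = reduced_rank_regression(X, Y, rank=1)
print(np.linalg.matrix_rank(B_hat))  # 1
```

The rank constraint is what lets information be pooled across the response columns; the paper adds a penalty so the rank (number of factors) need not be fixed in advance.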
An individualized decision rule (IDR) is a decision function that assigns a given treatment to an individual based on his/her observed characteristics. Most existing work in the literature considers settings with binary or finitely many treatment options. In this paper, we focus on the continuous treatment setting and propose jump interval-learning to develop an individualized interval-valued decision rule (I2DR) that maximizes the expected outcome. Unlike IDRs that recommend a single treatment, the proposed I2DR yields an interval of treatment options for each individual, making it more flexible to implement in practice. To derive an optimal I2DR, our jump interval-learning method estimates the conditional mean of the outcome given the treatment and covariates via jump penalized regression, and derives the corresponding optimal I2DR based on the estimated outcome regression function. The regression model is allowed to be either linear, for clear interpretation, or a deep neural network, to model complex treatment-covariate interactions. To implement jump interval-learning, we develop a search algorithm based on dynamic programming that efficiently computes the outcome regression function. Statistical properties of the resulting I2DR are established when the outcome regression function is either a piecewise or a continuous function of the treatment. We further develop a procedure to infer the mean outcome under the (estimated) optimal policy. Extensive simulations and a real data application to a warfarin study are conducted to demonstrate the empirical validity of the proposed I2DR.
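The dynamic-programming step behind jump penalized regression can be sketched in its simplest one-dimensional form: fit a piecewise-constant mean, paying a fixed penalty per change point. This toy version assumes no covariates (the paper regresses on covariates within each interval, linearly or via deep nets), and the penalty value is illustrative:

```python
import numpy as np

def jump_penalized_fit(y, lam):
    """Dynamic-programming solver for jump-penalized least squares:
    fit a piecewise-constant mean, paying `lam` per change point."""
    n = len(y)
    s1 = np.concatenate([[0.0], np.cumsum(np.asarray(y, float))])
    s2 = np.concatenate([[0.0], np.cumsum(np.asarray(y, float) ** 2)])

    def seg_cost(i, j):  # SSE of fitting y[i:j] by its mean, in O(1)
        m = (s1[j] - s1[i]) / (j - i)
        return (s2[j] - s2[i]) - (j - i) * m * m

    cost = np.full(n + 1, np.inf)
    cost[0] = -lam                      # so only changes, not the first segment, pay lam
    back = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        for i in range(j):
            c = cost[i] + seg_cost(i, j) + lam
            if c < cost[j]:
                cost[j], back[j] = c, i
    cps, j = [], n                      # recover change points by backtracking
    while j > 0:
        cps.append(int(back[j]))
        j = back[j]
    return sorted(cps)[1:]              # drop the leading 0

y = np.r_[np.zeros(50), 3.0 * np.ones(50)]
print(jump_penalized_fit(y, lam=1.0))   # [50]
```

The detected change points define the intervals of the treatment space on which the I2DR recommends a constant range of doses.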
With the proliferation of knowledge graphs, modeling data with complex multi-relational structure has gained increasing attention in the statistical relational learning community. One of the most important goals of statistical relational learning is link prediction, i.e., predicting whether certain relations exist in the knowledge graph. A large number of models and algorithms have been proposed to perform link prediction, among which tensor factorization methods have proven to achieve state-of-the-art performance in terms of computational efficiency and prediction accuracy. However, a common drawback of existing tensor factorization models is that missing relations and non-existing relations are treated in the same way, which leads to a loss of information. To address this issue, we propose a binary tensor factorization model with a probit link, which not only inherits the computational efficiency of classical tensor factorization models but also accounts for the binary nature of relational data. Our proposed probit tensor factorization (PTF) model shows advantages in both prediction accuracy and interpretability.
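The probit link on top of a factorization can be sketched as follows: score a (head, relation, tail) triple by a trilinear product of factor vectors and pass it through the standard normal CDF. The factor shapes and names here are illustrative, not the paper's notation:

```python
import numpy as np
from math import erf, sqrt

def std_norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def link_probability(E, R, h, r, t):
    """Probit-link probability that the triple (head h, relation r, tail t)
    holds under a CP-style factorization: Phi(sum_k E[h,k] R[r,k] E[t,k])."""
    score = float(np.sum(E[h] * R[r] * E[t]))
    return std_norm_cdf(score)

rng = np.random.default_rng(1)
E = rng.normal(size=(4, 3))   # 4 entities, rank-3 embeddings
R = rng.normal(size=(2, 3))   # 2 relations
p = link_probability(E, R, 0, 1, 2)
print(0.0 < p < 1.0)          # True
```

Because the output is a genuine probability, observed non-existing links and unobserved (missing) entries can enter the likelihood differently, which is the information classical real-valued factorizations discard.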
Personalized medicine, a paradigm of medicine tailored to a patient's characteristics, is an increasingly attractive field in health care. An important goal of personalized medicine is to identify a subgroup of patients, based on baseline covariates, that benefits more from the targeted treatment than from other comparative treatments. Most current subgroup identification methods focus only on obtaining a subgroup with an enhanced treatment effect, without attention to subgroup size. However, a clinically meaningful subgroup learning approach should identify the maximum number of patients who can benefit from the better treatment. In this paper, we propose an optimal subgroup selection rule (SSR) that maximizes the number of selected patients while, at the same time, achieving a pre-specified clinically meaningful mean outcome, such as the average treatment effect. We derive two equivalent theoretical forms of the optimal SSR based on the contrast function that describes the treatment-covariate interaction in the outcome. We further propose a constrained policy tree search algorithm (CAPITAL) to find the optimal SSR within the interpretable class of decision trees. The proposed method is flexible: it can handle multiple constraints that penalize the inclusion of patients with negative treatment effects, and it can address time-to-event data using the restricted mean survival time as the clinically interesting mean outcome. Extensive simulations, comparison studies, and real data applications are conducted to demonstrate the validity and utility of our method.
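The constrained search can be sketched at depth one: among threshold rules "select if x > c", pick the one that maximizes the number of selected patients subject to their average treatment contrast exceeding a pre-specified level. This toy stand-in uses a single covariate and exhaustive search; the actual policy tree search splits on several covariates and handles richer constraints:

```python
import numpy as np

def best_threshold_rule(x, score, min_avg):
    """Best depth-1 rule 'select if x > c': maximize selected count
    subject to the selected patients' mean contrast >= min_avg."""
    best_c, best_n = None, 0
    for c in np.unique(x):
        sel = x > c
        if sel.sum() > best_n and score[sel].mean() >= min_avg:
            best_c, best_n = float(c), int(sel.sum())
    return best_c, best_n

x = np.array([0.1, 0.2, 0.5, 0.7, 0.9])
score = np.array([-1.0, -0.5, 1.0, 2.0, 3.0])  # estimated per-patient contrast
print(best_threshold_rule(x, score, min_avg=1.5))  # (0.2, 3)
```

Note the trade-off the objective encodes: the threshold c = 0.1 would select one more patient but drag the average contrast below the constraint, so the rule settles for the larger-benefit subgroup of size three.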
We consider the optimal decision-making problem in a primary sample of interest with multiple auxiliary sources available. The outcome of interest is limited in the sense that it is observed only in the primary sample. In practice, such multiple data sources may belong to heterogeneous studies and thus cannot be combined directly. This paper proposes a new framework to handle heterogeneous studies and address the limited outcome simultaneously, through a novel calibrated optimal decision making (CODA) method that leverages the common intermediate outcomes across the data sources. Specifically, CODA allows the baseline covariates across different samples to have either homogeneous or heterogeneous distributions. Under the mild and testable assumption that the conditional means of the intermediate outcomes in different samples are equal given baseline covariates and treatment information, we show that the proposed CODA estimator of the conditional mean outcome is asymptotically normal and more efficient than using the primary sample alone. In addition, owing to its rate double robustness, the variance of the CODA estimator can be easily obtained by a simple plug-in method. Extensive experiments on simulated datasets demonstrate the empirical validity and improved efficiency of CODA, followed by a real application with the MIMIC-III dataset as the primary sample and auxiliary data from eICU.
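The idea of borrowing strength through a shared intermediate outcome can be sketched in a stylized form: fit E[Y | M] on the primary sample (a linear working model, purely for illustration), impute the unobserved final outcome on each auxiliary sample via its intermediate outcome M, and pool. CODA itself uses calibrated, rate doubly robust weighting rather than this naive pooling:

```python
import numpy as np

def coda_style_estimate(M_primary, Y_primary, M_aux_list):
    """Stylized borrow-through-intermediate-outcome estimate: fit a
    linear E[Y | M] on the primary sample, impute Y on auxiliary
    samples via M, then pool the per-sample mean estimates."""
    M_primary = np.asarray(M_primary, float)
    Y_primary = np.asarray(Y_primary, float)
    A = np.c_[np.ones_like(M_primary), M_primary]
    beta, *_ = np.linalg.lstsq(A, Y_primary, rcond=None)
    estimates = [Y_primary.mean()]
    for M in M_aux_list:
        estimates.append(beta[0] + beta[1] * np.mean(M))
    return float(np.mean(estimates))

# Toy data where Y = 2 * M exactly: primary mean outcome is 2, the
# auxiliary sample's imputed mean outcome is 4, pooled estimate is 3.
est = coda_style_estimate([0.0, 1.0, 2.0], [0.0, 2.0, 4.0], [[2.0, 2.0]])
print(est)  # approximately 3.0
```

The equal-conditional-means assumption on M is what licenses the imputation step across heterogeneous studies.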
We consider off-policy evaluation (OPE) in continuous treatment settings, such as personalized dose-finding. In OPE, one aims to estimate the mean outcome under a new treatment decision rule using historical data generated by a different decision rule. Most existing work on OPE focuses on discrete treatment settings. To handle continuous treatments, we develop a novel estimation method for OPE using deep jump learning. The key ingredient of our method lies in adaptively discretizing the treatment space using deep discretization, by leveraging deep learning and multi-scale change point detection. This allows us to apply existing OPE methods for discrete treatments to handle continuous treatments. Our method is further justified by theoretical results, simulations, and a real application to warfarin dosing.
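The "discretize, then apply discrete-treatment OPE" step can be sketched as follows: partition the dose range and estimate a value per interval. This toy version takes the partition as given and uses raw per-interval outcome means; the paper learns the partition adaptively (deep learning plus multi-scale change point detection) and uses fitted regression models:

```python
import numpy as np

def interval_value_estimates(a, y, cuts):
    """Partition the dose range at `cuts`, then estimate each interval's
    value by the mean observed outcome of the doses falling in it."""
    bins = np.digitize(a, cuts)           # interval index of each dose
    return {int(k): float(y[bins == k].mean()) for k in np.unique(bins)}

a = np.array([0.1, 0.2, 0.6, 0.7, 0.9])   # observed doses
y = np.array([1.0, 1.0, 3.0, 3.0, 5.0])   # observed outcomes
print(interval_value_estimates(a, y, cuts=[0.5, 0.8]))
# {0: 1.0, 1: 3.0, 2: 5.0}
```

Once the treatment space is reduced to a finite set of intervals, any standard discrete-treatment OPE estimator can be applied interval by interval.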
Video semantic segmentation (VSS) is beneficial for dealing with dynamic scenes due to the continuous nature of the real-world environment. Some methods alleviate the problem of inconsistent predictions between consecutive frames, while others employ the previous frame as prior information to assist in segmenting the current frame. Although these methods achieve superior performance on independent and identically distributed (i.i.d.) data, they cannot generalize well to unseen domains. Thus, we explore a new task, video generalizable semantic segmentation (VGSS), which considers both continuous frames and domain generalization. In this paper, we propose a class-wise non-salient region generalized (CNSG) framework for the VGSS task. Concretely, we first define the class-wise non-salient feature, which describes features of the class-wise non-salient region that carry more generalizable information. Then, we propose a class-wise non-salient feature reasoning strategy to adaptively select and enhance the most generalizable channels. Finally, we propose an inter-frame non-salient centroid alignment loss to alleviate the prediction inconsistency problem in the VGSS task. We also extend our video-based framework to the image-based generalizable semantic segmentation (IGSS) task. Experiments demonstrate that our CNSG framework yields significant improvements on both the VGSS and IGSS tasks.
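The centroid alignment idea can be sketched in a simplified numpy form: compute per-class mean feature vectors for two consecutive frames and penalize the squared distance between them. The actual loss is computed on CNN feature maps restricted to the non-salient regions; this sketch omits that restriction:

```python
import numpy as np

def centroid_alignment_loss(feat_t, feat_t1, labels_t, labels_t1):
    """Average squared L2 distance between per-class feature centroids
    of two frames; classes must appear in both frames to contribute."""
    shared = np.intersect1d(np.unique(labels_t), np.unique(labels_t1))
    dists = []
    for c in shared:
        mu_t = feat_t[labels_t == c].mean(axis=0)
        mu_t1 = feat_t1[labels_t1 == c].mean(axis=0)
        dists.append(float(np.sum((mu_t - mu_t1) ** 2)))
    return sum(dists) / len(dists)

feats = np.array([[1.0, 0.0], [1.0, 2.0], [3.0, 4.0]])
labels = np.array([0, 0, 1])
print(centroid_alignment_loss(feats, feats, labels, labels))  # 0.0
```

Driving this loss to zero encourages each class's feature statistics to stay stable across frames, which is the source of the improved temporal consistency.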
Generalist models, which can perform diverse multi-modal tasks in a task-agnostic way within a single model, have been explored recently. Though a promising route toward general-purpose AI, existing generalist models are still at an early stage, with limited modality and task coverage. To empower multi-modal task scaling and speed up this line of research, we release a generalist model learning system, OFASys, built on top of a declarative task interface named multi-modal instruction. At the core of OFASys is the idea of decoupling multi-modal task representations from the underlying model implementations. In OFASys, a task involving multiple modalities can be defined declaratively, even with just a single line of code. The system automatically generates task plans from such instructions for training and inference, and it facilitates multi-task training for diverse multi-modal workloads. As a starting point, we provide presets of 7 different modalities and 23 highly diverse example tasks in OFASys, with which we also develop a first-of-its-kind single model, OFA+, that can handle text, image, speech, video, and motion data. The single OFA+ model achieves, on average, 95% of the performance of 15 task-finetuned models with only 16% of their parameters, showcasing the reliability of the multi-modal task scaling provided by OFASys. Available at https://github.com/OFA-Sys/OFASys
Neural radiance fields (NeRF) achieve highly photo-realistic novel-view synthesis, but editing the scenes modeled by NeRF-based methods, especially dynamic scenes, remains a challenging problem. We propose editable neural radiance fields that enable end-users to easily edit dynamic scenes and even support topological changes. Given an image sequence from a single camera as input, our network is trained fully automatically and models topologically varying dynamics using our picked-out surface key points. End-users can then edit the scene by simply dragging the key points to desired new positions. To achieve this, we propose a scene analysis method that detects and initializes key points by considering the dynamics in the scene, and a weighted key points strategy that models topologically varying dynamics through joint optimization of key points and weights. Our method supports intuitive multi-dimensional (up to 3D) editing and can generate novel scenes that are unseen in the input sequence. Experiments demonstrate that our method achieves high-quality editing on various dynamic scenes and outperforms the state of the art. We will release our code and captured data.
Deep convolutional neural networks have proven their effectiveness and are acknowledged as the most dominant method for image classification. However, a severe drawback of deep convolutional neural networks is their poor explainability. Unfortunately, in many real-world applications, users need to understand the rationale behind the predictions of deep convolutional neural networks when deciding whether to trust those predictions. To resolve this issue, a novel genetic algorithm-based method is proposed, for the first time, to automatically evolve local explanations that can assist users in assessing the rationality of the predictions. Furthermore, the proposed method is model-agnostic, i.e., it can be utilised to explain any deep convolutional neural network model. In the experiments, ResNet is used as an example model to be explained, and the ImageNet dataset is selected as the benchmark dataset. DenseNet and MobileNet are further explained to demonstrate the model-agnostic characteristic of the proposed method. The evolved local explanations on four images, randomly selected from ImageNet, are presented, showing that they are straightforward for humans to recognise. Moreover, the evolved explanations explain the predictions of deep convolutional neural networks on all four images very well by successfully capturing meaningful interpretable features of the sample images. Further analysis based on 30 runs of the experiments shows that the evolved local explanations can also improve the probabilities/confidences of the deep convolutional neural network models in making their predictions. The proposed method can obtain local explanations within one minute, more than ten times faster than the state-of-the-art method LIME.
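A genetic algorithm over explanation candidates can be sketched as follows: evolve binary masks over input features (standing in for image regions), scoring each mask with a fitness function. The toy fitness below is a negative Hamming distance to a target mask, standing in for "model confidence on the masked image" in the real method; the operator choices (truncation selection, one-point crossover, single-bit mutation) are illustrative, not the paper's exact settings:

```python
import random

def evolve_mask(fitness, n_features, pop=20, gens=30, seed=0):
    """Minimal GA over binary feature masks: truncation selection,
    one-point crossover, single-bit mutation, elitist survivors."""
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(n_features)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop // 2]           # keep the best half
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_features)     # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n_features)] ^= 1  # single-bit mutation
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

target = [1, 1, 0, 0, 0, 0]
best = evolve_mask(lambda m: -sum(u != v for u, v in zip(m, target)), 6)
print(best)
```

Because fitness only needs black-box access to the classifier's output, the same loop works for ResNet, DenseNet, or MobileNet unchanged, which is the model-agnostic property the abstract highlights.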